VLSI Implementation of Neural Networks

Authors

  • Bogdan M. Wilamowski
  • J. Binfet
  • M. Okyay Kaynak
Abstract

Currently, fuzzy controllers are the most popular choice for hardware implementation of complex control surfaces because they are easy to design. Neural controllers are more complex and harder to train, but they provide an outstanding control surface with much less error than a fuzzy controller. There are, however, several problems that have to be solved before such networks can be implemented on VLSI chips. First, an approximation function needs to be developed, because CMOS neural networks have an activation function different from any function used in neural network software. Next, this function has to be used to train the network. Finally, the last problem for VLSI designers is the quantization effect caused by the discrete values of the channel length (L) and width (W) of MOS transistor geometries. Two neural networks were designed in 1.5 μm technology. Using adequate approximation functions solved the activation-function problem, and the trained networks were characterized by very small errors. Unfortunately, when the weights were quantized, the errors increased by an order of magnitude. Even so, the results obtained from the neural network hardware implementations were superior to those obtained with the fuzzy system approach.
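
The design flow summarized in the abstract (approximate the CMOS activation function, train with it, then quantize the weights) can be illustrated with a minimal sketch. Everything below, including the scaled-tanh stand-in for the CMOS activation, the toy control surface, and the 0.25 quantization step, is an assumption for illustration rather than the authors' actual functions or technology parameters.

```python
# Hypothetical sketch of the flow described above (not the authors' code):
# 1) approximate the CMOS neuron's activation with a smooth function,
# 2) train a small network with that function,
# 3) quantize the weights to a discrete grid, mimicking discrete W/L transistor
#    geometries, and compare the error before and after quantization.
import numpy as np

rng = np.random.default_rng(0)

def cmos_act(x, gain=2.0):
    # Placeholder for the CMOS activation approximation; the real circuit's
    # transfer curve differs from the standard sigmoid/tanh used in software.
    return np.tanh(gain * x)

def cmos_act_deriv(x, gain=2.0):
    return gain * (1.0 - np.tanh(gain * x) ** 2)

# Toy 2-input control surface to be reproduced (assumed, for illustration only).
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])

# One hidden layer with a handful of neurons, trained by plain gradient descent.
n_hidden = 8
W1 = rng.normal(0, 0.5, size=(2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, size=(n_hidden, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(5000):
    z1 = X @ W1 + b1
    h = cmos_act(z1)
    out = (h @ W2 + b2).ravel()
    err = out - y
    # Backpropagate the mean-squared error through the activation approximation.
    g_out = (2.0 / len(y)) * err[:, None]
    gW2 = h.T @ g_out; gb2 = g_out.sum(0)
    g_h = (g_out @ W2.T) * cmos_act_deriv(z1)
    gW1 = X.T @ g_h; gb1 = g_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def mse(W1q, b1q, W2q, b2q):
    return np.mean(((cmos_act(X @ W1q + b1q) @ W2q + b2q).ravel() - y) ** 2)

# Quantize weights to a coarse grid, standing in for discrete W/L ratios.
step = 0.25                      # assumed quantization step
quant = lambda w: np.round(w / step) * step
print("MSE (continuous weights):", mse(W1, b1, W2, b2))
print("MSE (quantized weights): ", mse(quant(W1), quant(b1), quant(W2), quant(b2)))
```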


Similar Articles

Weight Perturbation: An Optimal Architecture and Learning Technique for Analog VLSI Feedforward and Recurrent Multilayer Networks

Previous work on analog VLSI implementation of multilayer perceptrons with on-chip learning has mainly targeted the implementation of algorithms such as back-propagation. Although back-propagation is efficient, its implementation in analog VLSI requires excessive computational hardware. It is shown that using gradient descent with direct approximation of the gradient instead of back-propagation...
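
The weight-perturbation idea referenced above can be sketched as follows: instead of backpropagating errors, each weight is perturbed slightly and the resulting change in the output error is used as a finite-difference estimate of the gradient. The single-neuron example, target, and step sizes below are illustrative assumptions, not the paper's exact learning rule.

```python
# Minimal sketch of weight perturbation (finite-difference gradient estimation).
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 2))
y = X[:, 0] * X[:, 1]                      # toy target (assumed)

w = rng.normal(0, 0.3, 3)                  # single neuron: 2 weights + bias

def loss(w):
    out = np.tanh(X @ w[:2] + w[2])
    return np.mean((out - y) ** 2)

delta, lr = 1e-3, 0.2
for _ in range(2000):
    base = loss(w)
    grad = np.empty_like(w)
    for i in range(w.size):                # perturb one weight at a time
        w[i] += delta
        grad[i] = (loss(w) - base) / delta # forward-difference gradient estimate
        w[i] -= delta
    w -= lr * grad

print("final MSE:", loss(w))
```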


Implementation of a programmable neuron in CNTFET technology for low-power neural networks

Circuit-level implementation of a novel neuron is discussed in this article. A low-power Activation Function (AF) circuit is introduced, which is then combined with a highly linear synapse circuit to form the neuron architecture. Designed in Carbon Nanotube Field-Effect Transistor (CNTFET) technology, the proposed structure consumes low power, which makes it suitable for the...


VLSI Implementation of Neural Network

This paper proposes a novel approach to multi-objective optimization for VLSI implementation of an Artificial Neural Network (ANN) that is area-, power-, and speed-efficient and has a high degree of accuracy and dynamic range. A VLSI implementation of a feed-forward neural network in IEEE-754 single-precision 32-bit floating-point arithmetic is presented that makes use of digital weigh...
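
For reference, a hypothetical snippet showing how a single network weight decomposes into sign, exponent, and mantissa fields when stored in the IEEE-754 single-precision 32-bit format mentioned above; the example value is arbitrary.

```python
# Illustrative only: fields of a weight stored in IEEE-754 single precision.
import struct

def float32_fields(x: float):
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign     = bits >> 31
    exponent = (bits >> 23) & 0xFF         # biased by 127
    mantissa = bits & 0x7FFFFF             # 23-bit fraction
    return sign, exponent, mantissa

w = -0.15625                               # arbitrary example weight
s, e, m = float32_fields(w)
print(f"weight {w}: sign={s}, exponent={e} (unbiased {e - 127}), mantissa=0x{m:06X}")
```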


A High-Storage Capacity Content-Addressable Memory and Its Learning Algorithm

Hopfield's neural networks show retrieval and speed capabilities that make them good candidates for content-addressable memories (CAMs) in problems such as pattern recognition and optimization. This paper presents a new implementation of a VLSI fully interconnected neural network with only two binary memory points per synapse (the connection weights are restricted to three different v...
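
A rough sketch of the general idea of a content-addressable memory whose connection weights are restricted to three values (-1, 0, +1): the textbook Hebbian storage and sign-update rules below are assumptions for illustration and are not taken from the paper.

```python
# Hopfield-style CAM with ternary weights, recalling a pattern from a noisy probe.
import numpy as np

patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                     [ 1,  1, -1, -1,  1,  1]])

# Hebbian outer-product storage, then clip each weight to -1, 0 or +1.
W = np.sign(sum(np.outer(p, p) for p in patterns))
np.fill_diagonal(W, 0)

def recall(probe, steps=10):
    s = probe.copy()
    for _ in range(steps):                 # synchronous sign updates
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

noisy = patterns[0].copy()
noisy[0] *= -1                             # flip one bit of the stored pattern
print("recalled:", recall(noisy))
print("stored:  ", patterns[0])
```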


CMOS VLSI Hyperbolic Tangent Function & its Derivative Circuits for Neuron Implementation

The hyperbolic tangent function and its derivative are key elements in analog signal processing, and especially in the analog VLSI implementation of neurons of artificial neural networks. The main requirements for these types of circuits are small silicon area and low power consumption. The objective of this paper is to study and design CMOS VLSI hyperbolic tangent function and its der...
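
The functional relationship such circuits realize can be stated directly: tanh(x) = (e^x - e^-x) / (e^x + e^-x), and its derivative is 1 - tanh(x)^2, so a single tanh core can serve both the activation path and its derivative path. A small numerical check of this identity (illustrative only):

```python
# Verify d/dx tanh(x) = 1 - tanh(x)^2 against a finite-difference estimate.
import numpy as np

x = np.linspace(-3, 3, 601)
t = np.tanh(x)
deriv_identity = 1 - t ** 2                 # closed-form derivative
deriv_numeric = np.gradient(t, x)           # finite-difference check
print("max |difference|:", np.max(np.abs(deriv_identity - deriv_numeric)))
```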


Design and Non-linear Modelling of CMOS Multipliers for Analog VLSI Implementation of Neural Algorithms

Analog VLSI implementation looks like an attractive way of implementing Artificial Neural Networks; it gives small area, low power consumption, and compact design of neural computational primitive circuits. On the other hand, the major drawbacks are low computational accuracy and the non-linear behaviour of analog circuits. In this paper, we present the design and the detailed b...
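
One common way to see where the non-linear behaviour of analog multipliers comes from is a tanh-based behavioral model of a Gilbert-type cell; the model, bias current, and input voltages below are assumptions for illustration and are not the multiplier model developed in the paper.

```python
# Simple behavioral model: output is nearly k*vx*vy for small inputs,
# but compresses (becomes non-linear) as the inputs grow.
import numpy as np

VT = 0.026          # thermal voltage (V) at room temperature
I_BIAS = 10e-6      # assumed tail bias current (A)

def multiplier_out(vx, vy):
    return I_BIAS * np.tanh(vx / (2 * VT)) * np.tanh(vy / (2 * VT))

ideal_gain = I_BIAS / (2 * VT) ** 2          # small-signal gain k in I = k*vx*vy
for vx in (0.005, 0.02, 0.1):                # input voltages in volts
    vy = vx
    print(f"vx=vy={vx:5.3f} V  model={multiplier_out(vx, vy)*1e6:8.4f} uA"
          f"  ideal={ideal_gain*vx*vy*1e6:8.4f} uA")
```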




Journal:
  • International Journal of Neural Systems

Volume 10, Issue 3

Pages: -

Publication year: 2000